Deep Learning with Dynamically Weighted Loss Function for Sensor-Based Prognostics and Health Management
Deep learning has been employed for prognostics and health management of automotive and aerospace systems with promising results. The literature in this area reveals that most contributions are largely focused on the model's architecture. However, contributions regarding the improvement of other aspects of deep learning, such as custom loss functions for prognostics and health management, are scarce. There is therefore an opportunity to improve the effectiveness of deep learning for a system's prognostics and diagnostics without modifying the models' architecture. To address this gap, two different dynamically weighted loss functions are investigated for prognostics and diagnostics tasks: a newly proposed weighting mechanism and a focal loss function. A dynamically weighted loss function modifies the learning process by augmenting the loss with a weight value corresponding to the learning error of each data instance. The objective is to force deep learning models to focus on those instances where larger learning errors occur, in order to improve their performance. The two loss functions are evaluated using four popular deep learning architectures, namely a deep feedforward neural network, a one-dimensional convolutional neural network, a bidirectional gated recurrent unit and a bidirectional long short-term memory network, on the commercial modular aero-propulsion system simulation data from NASA and the air pressure system failure data for Scania trucks. Experimental results show that dynamically weighted loss functions achieve significant improvements in remaining useful life prediction and fault detection rate over non-weighted loss function predictions.
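The core idea of a dynamically weighted loss can be illustrated with a short sketch. The function names and the specific weighting rule below (scaling each instance's squared error by a weight that grows with that instance's own error) are illustrative assumptions for exposition, not the paper's exact formulation.

```python
# Illustrative sketch of a dynamically weighted loss: each instance's squared
# error is scaled by a weight that increases with its own error, so training
# focuses on instances with larger learning errors. The weighting rule
# (1 + alpha * |error|) is an assumption chosen for clarity.

def dynamically_weighted_mse(y_true, y_pred, alpha=1.0):
    """Mean of per-instance squared errors, each weighted by 1 + alpha * |error|."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        err = t - p
        weight = 1.0 + alpha * abs(err)  # larger error -> larger weight
        total += weight * err * err
    return total / len(y_true)

def plain_mse(y_true, y_pred):
    """Standard (non-weighted) mean squared error, for comparison."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
```

Compared with the plain loss, the weighted version penalises the same large error more heavily, which is the mechanism the abstract describes for steering the model toward hard instances.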
Deep learning approaches to aircraft maintenance, repair and overhaul: a review
The use of sensor technology to constantly gather aircraft status data has promoted the rapid development of data-driven solutions in aerospace engineering. These methods assist, for instance, with determining appropriate actions for aircraft maintenance, repair and overhaul (MRO). Challenges, however, arise when dealing with such large amounts of data. Identifying patterns and anomalies and disambiguating faults with acceptable levels of accuracy and reliability are examples of the complex problems in this area. Experiments using deep learning techniques, however, have demonstrated their usefulness in assisting with the analysis of aircraft health data. The purpose of this paper, therefore, is to conduct a survey of deep learning architectures and their application to aircraft MRO. Although deep learning in general is not yet widely exploited for aircraft health, from our search we identified four main architectures employed in MRO, namely Deep Autoencoders, Long Short-Term Memory, Convolutional Neural Networks and Deep Belief Networks. For each architecture, we review its main concepts, the types of problems to which it is applied, the types of data used and the outcomes. We also discuss how research in this area can be advanced by identifying current research gaps and outlining future research opportunities.
Towards a More Reliable Interpretation of Machine Learning Outputs for Safety-Critical Systems Using Feature Importance Fusion
When machine learning supports decision-making in safety-critical systems, it is important to verify and understand the reasons why a particular output is produced. Although feature importance calculation approaches assist in interpretation, there is a lack of consensus regarding how feature importance is quantified, which makes the explanations offered for the outcomes mostly unreliable. A possible solution to address this lack of agreement is to combine the results from multiple feature importance quantifiers to reduce the variance of the estimates. Our hypothesis is that this will lead to more robust and trustworthy interpretations of the contribution of each feature to machine learning predictions. To test this hypothesis, we propose an extensible framework divided into four main parts: (i) traditional data pre-processing and preparation for predictive machine learning models; (ii) predictive machine learning; (iii) feature importance quantification; and (iv) feature importance decision fusion using an ensemble strategy. We also introduce a novel fusion metric and compare it to the state of the art. Our approach is tested on synthetic data, where the ground truth is known. We compare different fusion approaches and their results for both training and test sets, and investigate how different characteristics within the datasets affect the feature importance ensembles studied. Results show that our feature importance ensemble framework overall produces 15% less feature importance error than existing methods. Additionally, the results reveal that different levels of noise in the datasets do not affect the ensembles' ability to accurately quantify feature importance, whereas the feature importance quantification error increases with the number of features and the number of orthogonal informative features.
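The fusion step (part iv of the framework) can be sketched in a few lines. The averaging rule below is a minimal illustration of ensemble fusion, not the paper's novel fusion metric; the quantifier names in the usage comment are hypothetical.

```python
# Minimal sketch of feature importance decision fusion: normalise each
# quantifier's per-feature scores to sum to 1, then average across
# quantifiers, reducing the variance of any single quantifier's estimate.
# This simple mean-fusion rule is an assumption for illustration only.

def fuse_importances(quantifier_scores):
    """quantifier_scores: list of per-feature score lists, one per quantifier.

    Returns fused per-feature importances that sum to 1.
    """
    n_features = len(quantifier_scores[0])
    n_quantifiers = len(quantifier_scores)
    fused = [0.0] * n_features
    for scores in quantifier_scores:
        total = sum(scores)
        for i, s in enumerate(scores):
            fused[i] += (s / total) / n_quantifiers
    return fused

# Usage: fusing, say, permutation-based and tree-based scores for 3 features.
fused = fuse_importances([[2.0, 1.0, 1.0],   # quantifier A
                          [4.0, 0.0, 0.0]])  # quantifier B
```

Normalising before averaging keeps quantifiers with larger raw score scales from dominating the ensemble.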
Machine learning to determine the main factors affecting creep rates in laser powder bed fusion
There is an increasing need for the use of additive manufacturing (AM) to produce improved critical-application engineering components. However, materials manufactured using AM perform well below their traditionally manufactured counterparts, particularly for creep and fatigue. Research has shown that this difference in performance is due to the complex relationships between AM process parameters, which affect the material microstructure and consequently the mechanical performance. Therefore, it is necessary to understand the impact of different AM build parameters on the mechanical performance of parts. Machine learning (ML) models are able to find hidden relationships in data using iterative statistical analyses and have the potential to develop process-structure-property-performance relationships for manufacturing processes, including AM. The aim of this work is to apply ML techniques to materials testing data in order to understand the effect of AM process parameters on the creep rate of an additively built nickel-based superalloy, and to predict the creep rate of the material from these process parameters. In this work, the predictive capabilities of ML and its ability to develop process-structure-property relationships are applied to the creep properties of laser powder bed fused alloy 718. The input data for the ML model included the laser powder bed fusion (LPBF) build parameters used (build orientation, scan strategy and number of lasers) and geometrical material descriptors extracted from optical microscope porosity images using image analysis techniques. The ML model was used to predict the minimum creep rate of the LPBF alloy 718 samples, which had been creep tested at 650 °C and 600 MPa. The ML model was also used to identify the most relevant material descriptors affecting the minimum creep rate of the material (determined using an ensemble feature importance framework).
The creep rate was accurately predicted with a percentage error of 1.40% in the best case. The most important material descriptors were found to be part density, number of pores, build orientation and scan strategy. These findings show the applicability and potential of using ML to determine and predict the mechanical properties of materials fabricated via different manufacturing processes, and to find process-structure-property relationships in AM. This increases the readiness of AM for use in critical applications.
Feature importance in machine learning models: A fuzzy information fusion approach
With the widespread use of machine learning to support decision-making, it is increasingly important to verify and understand the reasons why a particular output is produced. Although post-training feature importance approaches assist this interpretation, there is an overall lack of consensus regarding how feature importance should be quantified, making explanations of model predictions unreliable. In addition, many of these explanations depend on the specific machine learning approach employed and on the subset of data used when calculating feature importance. A possible solution to improve the reliability of explanations is to combine results from multiple feature importance quantifiers from different machine learning approaches, coupled with re-sampling. Current state-of-the-art ensemble feature importance fusion uses crisp techniques to fuse results from different approaches. There is, however, significant loss of information, as these approaches are not context-aware and reduce several quantifiers to a single crisp output. More importantly, their representation of "importance" as coefficients may be difficult for end-users and decision makers to comprehend. Here we show how the use of fuzzy data fusion methods can overcome some of the important limitations of crisp fusion methods by making the importance of features easily understandable.
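The step from crisp coefficients to understandable labels can be illustrated with a small sketch. The triangular membership functions and the three linguistic labels below are illustrative assumptions, not the paper's specific fuzzy fusion method.

```python
# Illustrative sketch of fuzzifying a crisp importance score: a score in
# [0, 1] is mapped to membership degrees in linguistic labels ("low",
# "moderate", "high") via triangular membership functions, so end-users
# read labels rather than raw coefficients. The label set and function
# shapes are assumptions chosen for clarity.

def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def fuzzify_importance(score):
    """Map a crisp importance score in [0, 1] to linguistic memberships."""
    return {
        "low": triangular(score, -0.5, 0.0, 0.5),
        "moderate": triangular(score, 0.0, 0.5, 1.0),
        "high": triangular(score, 0.5, 1.0, 1.5),
    }
```

A score of 0.25, for example, belongs partly to "low" and partly to "moderate", conveying graded importance instead of a single crisp number.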